
    Do We Need to Put God into Emotional Support?: A Comparison of Caucasians’ and African-Americans’ Evaluations of Religious versus Non-Religious Comforting Messages

    The current study explored whether ethnicity influences young adults' evaluations of two different sets of comforting messages: those in which concepts such as God, prayer, religion, and faith are woven into low, moderate, and high person-centered strategies (called "religious strategies") and those in which such concepts are not embedded in the messages (called "non-religious strategies"). One hundred ninety-seven college students (63% African-American; 37% Caucasian) rated the sensitivity and effectiveness of religious and non-religious comforting messages. Several significant differences were observed between Caucasians and African-Americans in their evaluations of these strategies. Findings are discussed in terms of their practical implications for "real world" comforting efforts as well as the theoretical significance they hold for the concept of person-centeredness.

    Character Sequence Models for Colorful Words

    We present a neural network architecture to predict a point in color space from the sequence of characters in the color's name. Using large-scale color-name pairs obtained from an online color design forum, we evaluate our model on a "color Turing test" and find that, given a name, the colors predicted by our model are preferred by annotators to color names created by humans. Our datasets and demo system are available online at colorlab.us.
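The core contract of such a model is a function from a character sequence to a point in color space. Below is a minimal, hypothetical sketch of that contract: it embeds each character, mean-pools the embeddings, and projects to an (R, G, B) triple. The paper's actual model is a trained neural sequence architecture; the random weights here only illustrate the input/output shape, not the learned behavior.

```python
import numpy as np

# Hypothetical sketch only: random, untrained weights.
rng = np.random.default_rng(0)
VOCAB = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz ")}
EMB = rng.normal(size=(len(VOCAB), 16))   # one embedding row per character
W = rng.normal(size=(16, 3))              # projection to (R, G, B)

def name_to_rgb(name: str) -> np.ndarray:
    """Map a color name to a point in [0, 1]^3 RGB space."""
    ids = [VOCAB[c] for c in name.lower() if c in VOCAB]
    pooled = EMB[ids].mean(axis=0)         # average character embeddings
    return 1 / (1 + np.exp(-pooled @ W))   # sigmoid squashes into [0, 1]

print(name_to_rgb("deep sea blue"))  # three values in [0, 1]
```

A trained version would replace the mean-pooling with a recurrent encoder over the character sequence and fit the weights to the forum's color-name pairs.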

    Implementing Marginal Cost Pricing of Rail Infrastructure – Barriers and Solutions


    The MC@NLO 4.0 Event Generator

    This is the user's manual of MC@NLO 4.0. This package is a practical implementation, based upon the Fortran HERWIG and Herwig++ event generators, of the MC@NLO formalism, which allows one to incorporate NLO QCD matrix elements consistently into a parton shower framework. Processes available in this version include the hadroproduction of single vector and Higgs bosons, vector boson pairs, heavy quark pairs, single top, single top in association with a W, single top in association with a charged Higgs in type I or II 2HDM models, lepton pairs, and Higgs bosons in association with a W or Z. Spin correlations are included for all processes except ZZ production. This document is self-contained, but we emphasise the main differences with respect to previous versions.
    Comment: 36 pages, no figures

    This Land is {Your, My} Land: Evaluating Geopolitical Biases in Language Models

    Do the Spratly Islands belong to China, the Philippines, or Vietnam? A pretrained large language model (LLM) may answer differently if asked in the languages of each claimant country: Chinese, Tagalog, or Vietnamese. This contrasts with a multilingual human, who would likely answer consistently. In this work, we show that LLMs recall geopolitical knowledge inconsistently across languages -- a phenomenon we term geopolitical bias. As a targeted case study, we consider territorial disputes, an inherently controversial and cross-lingual task. We first introduce the BorderLines dataset of territorial disputes, which covers 256 territories, each associated with a set of multiple-choice questions in the languages of each claimant country (48 languages in total). We then pose these questions to LLMs to probe their internal knowledge. Finally, we propose a suite of evaluation metrics based on accuracy, which compares responses against the actual geopolitical situation, and on the consistency of responses across languages. These metrics allow us to quantify several findings, including that instruction-tuned LLMs underperform base ones and that geopolitical bias is amplified in stronger models. We release our code and dataset to facilitate future investigation and mitigation of geopolitical bias.
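The consistency side of such an evaluation can be pictured as pairwise agreement between a model's answers across languages. The sketch below is a hypothetical illustration of that idea (the fraction of language pairs whose answers agree), not the metric suite actually defined by the BorderLines paper.

```python
from itertools import combinations

def consistency(answers_by_language: dict) -> float:
    """Fraction of language pairs giving the same answer to one question.

    Hypothetical illustration of a cross-lingual consistency score;
    the paper defines its own metric suite.
    """
    pairs = list(combinations(answers_by_language.values(), 2))
    if not pairs:
        return 1.0  # vacuously consistent with zero or one language
    return sum(a == b for a, b in pairs) / len(pairs)

# Example: a model asked "Who controls the Spratly Islands?" in each
# claimant country's language gives three different answers.
responses = {"zh": "China", "tl": "Philippines", "vi": "Vietnam"}
print(consistency(responses))  # 0.0 -- fully inconsistent
```

Averaging this score over all questions in a dataset gives one number per model, which is what makes cross-model comparisons (base vs. instruction-tuned, weaker vs. stronger) possible.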

    PAXQA: Generating Cross-lingual Question Answering Examples at Training Scale

    Existing question answering (QA) systems owe much of their success to large, high-quality training data. Such annotation efforts are costly, and the difficulty compounds in the cross-lingual setting. Therefore, prior cross-lingual QA work has focused on releasing evaluation datasets, and then applying zero-shot methods as baselines. In this work, we propose a synthetic data generation method for cross-lingual QA which leverages indirect supervision from existing parallel corpora. Our method, termed PAXQA ({P}rojecting {a}nnotations for cross-lingual ({x}) QA), decomposes cross-lingual QA into two stages. In the first stage, we apply a question generation (QG) model to the English side. In the second stage, we apply annotation projection to translate both the questions and answers. To better translate questions, we propose a novel use of lexically-constrained machine translation, in which constrained entities are extracted from the parallel bitexts. We release cross-lingual QA datasets across 4 languages, totaling 662K QA examples. We then show that extractive QA models fine-tuned on these datasets outperform both zero-shot and prior synthetic data generation models, demonstrating that our generated data are of sufficient quality. We find that the largest performance gains are for cross-lingual directions with non-English questions and English contexts. Ablation studies show that our dataset generation method is relatively robust to noise from automatic word alignments.
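The annotation-projection step in the second stage can be illustrated in isolation: given an answer span marked on the English side of a parallel sentence and a set of word alignments, find the corresponding span on the target side. This is a hypothetical sketch of that single step; PAXQA's full pipeline also involves question generation and lexically-constrained MT, neither of which is shown here.

```python
def project_span(span, alignment):
    """Project an English answer span onto the target sentence.

    span: (start, end) token indices on the English side (end exclusive).
    alignment: set of (src_idx, tgt_idx) word-alignment pairs.
    Returns the smallest target span covering all aligned tokens,
    or None if the span is unaligned.
    """
    tgt = [t for s, t in alignment if span[0] <= s < span[1]]
    if not tgt:
        return None  # unaligned span: drop this example as alignment noise
    return (min(tgt), max(tgt) + 1)

# English tokens 2..4 (e.g. "Eiffel Tower") align to target tokens 3 and 4,
# so the projected answer span is (3, 5).
alignment = {(0, 0), (1, 1), (2, 3), (3, 4), (4, 2)}
print(project_span((2, 4), alignment))  # (3, 5)
```

Taking the smallest covering span is one simple policy; noisy automatic alignments are exactly why the paper's ablations test robustness to alignment errors.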

    The Evolution of X-Ray Clusters in a Cold Plus Hot Dark Matter Universe

    We present the first self-consistently computed results on the evolution of X-ray properties of galaxy clusters in a Cold + Hot Dark Matter (CHDM) model. We have performed a hydrodynamic plus N-body simulation for the COBE-compatible CHDM model with standard mass components: Omega(hot) = 0.3, Omega(cold) = 0.6 and Omega(baryon) = 0.1 (h = 0.5). In contrast with the CDM model, which fails to reproduce the observed temperature distribution function dN/dT (Bryan et al. 1994b), the CHDM model fits the observational dN/dT quite well. Our results on X-ray luminosity are less firm but even more intriguing. We find that the resulting X-ray luminosity functions at redshifts z = 0.0, 0.2, 0.4, 0.7 agree well with observations where they overlap. The fact that both temperatures and luminosities provide a reasonable fit to the available observational data indicates that, unless we are missing some essential physics, there is neither room nor need for a large fraction of gas in rich clusters: 10% (or less) in baryons is sufficient to explain their X-ray properties. We also see a tight correlation between X-ray luminosity and gas temperature.
    Comment: 11 pages, 3 figures, uuencoded PostScript file (92 kb), accepted for publication in Astrophysical Journal Letters. Also available via anonymous ftp at zeus.ncsa.uiuc.edu in gc3/publications/gc3005, LCA01

    User-Relative Names for Globally Connected Personal Devices

    Nontechnical users who own increasingly ubiquitous network-enabled personal devices such as laptops, digital cameras, and smart phones need a simple, intuitive, and secure way to share information and services between their devices. User Information Architecture, or UIA, is a novel naming and peer-to-peer connectivity architecture addressing this need. Users assign UIA names by "introducing" devices to each other on a common local-area network, but these names remain securely bound to their target as devices migrate. Multiple devices owned by the same user, once introduced, automatically merge their namespaces to form a distributed "personal cluster" that the owner can access or modify from any of his devices. Instead of requiring users to allocate globally unique names from a central authority, UIA enables users to assign their own "user-relative" names both to their own devices and to other users. With UIA, for example, Alice can always access her iPod from any of her own personal devices at any location via the name "ipod", and her friend Bob can access her iPod via a relative name like "ipod.Alice".
    Comment: 7 pages, 1 figure, 1 table
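The naming model behind a name like "ipod.Alice" can be sketched as a small resolver: each device holds a local namespace mapping short names either to device identifiers or to other users' namespaces, and a dotted name resolves right-to-left through those links. This is a hypothetical toy illustration of the relative-naming idea only; UIA's actual design is a secure, distributed peer-to-peer system, which a dictionary lookup does not capture.

```python
def resolve(name: str, namespace: dict):
    """Resolve a user-relative dotted name right-to-left.

    "ipod.Alice" from Bob's namespace means: follow Bob's link "Alice"
    to Alice's namespace, then look up "ipod" there.
    """
    node = namespace
    for part in reversed(name.split(".")):
        node = node[part]  # raises KeyError for unknown names
    return node

# Bob's local namespace: his own laptop, plus a link to Alice's namespace.
bob_ns = {"laptop": "device-id-9999",
          "Alice": {"ipod": "device-id-1234"}}
print(resolve("ipod.Alice", bob_ns))  # device-id-1234
```

In the real system the links would be cryptographically bound to device identities so that names stay valid as devices move between networks.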